
Conversation

@aThorp96 (Contributor) commented Feb 2, 2026

The previous image, quay.io/openshift-pipeline/pipelines-serve-tkn-cli-rhel9@sha256:ac27f336d3e4e9e2538a867c1a956ca8328428948dd72d9ad6456c346ab48989, seems to have disappeared from quay.io. While we investigate the issue and roll out the next release, this change should get CI working again by overriding the image with a more recent version. Since tkn-cli-serve is unused by Konflux, the image only needs to exist so the deployment can succeed.
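For anyone reproducing this locally, a minimal sketch of verifying the image situation with skopeo (assuming skopeo is available; the replacement digest below is a placeholder, not the exact value used in this PR):

```bash
# The previously pinned digest should now fail to resolve on quay.io:
skopeo inspect \
  docker://quay.io/openshift-pipeline/pipelines-serve-tkn-cli-rhel9@sha256:ac27f336d3e4e9e2538a867c1a956ca8328428948dd72d9ad6456c346ab48989 \
  || echo "old digest is gone"

# Before pinning a newer image, confirm it is still resolvable (placeholder digest):
skopeo inspect \
  docker://quay.io/openshift-pipeline/pipelines-serve-tkn-cli-rhel9@sha256:<replacement-digest>
```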

@openshift-ci openshift-ci bot requested review from Roming22 and enarha February 2, 2026 13:18
@openshift-ci openshift-ci bot added the approved label Feb 2, 2026
github-actions bot (Contributor) commented Feb 2, 2026

🤖 Gemini AI Assistant Available

Hi @aThorp96! I'm here to help with your pull request. You can interact with me using the following commands:

Available Commands

  • @gemini-cli /review - Request a comprehensive code review

    • Example: @gemini-cli /review Please focus on security and performance
  • @gemini-cli <your question> - Ask me anything about the codebase

    • Example: @gemini-cli How can I improve this function?
    • Example: @gemini-cli What are the best practices for error handling here?

How to Use

  1. Simply type one of the commands above in a comment on this PR
  2. I'll analyze your code and provide detailed feedback
  3. You can track my progress in the workflow logs

Permissions

Only OWNER, MEMBER, or COLLABORATOR users can trigger my responses. This ensures secure and appropriate usage.


This message was automatically added to help you get started with the Gemini AI assistant. Feel free to delete this comment if you don't need assistance.

github-actions bot (Contributor) commented Feb 2, 2026

🤖 Hi @aThorp96, I've received your request, and I'm working on it now! You can track my progress in the logs for more details.

@infernus01 (Member) commented:

/lgtm

@infernus01 (Member) commented:

/approve

@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Timeout

The E2E tests timed out due to prolonged application deployment and synchronization processes, potentially exacerbated by underlying Argo CD ApplicationSet synchronization issues and Tekton component errors.

📋 Technical Details

Immediate Cause

The appstudio-e2e-tests/redhat-appstudio-e2e step failed because the E2E tests exceeded the 2-hour execution timeout. The process was terminated after a grace period, indicating that the underlying operations did not complete within the expected timeframe.

Contributing Factors

Several factors may have contributed to the prolonged execution (a diagnostic sketch for checking these states follows the list):

  • Argo CD ApplicationSet Synchronization Issues: Artifacts show that several Argo CD ApplicationSets (build-service, cert-manager, etc.) are in an 'OutOfSync' state with a 'Missing' health status. This indicates that the desired application state is not being achieved, which could lead to deployment delays or failures.
  • Tekton Component Errors: The tektonaddons and tektonconfigs resources are in an 'Error' state, with specific messages indicating that components like tkn-cli-serve are not ready. This could impact the functionality of the CI/CD pipeline, including deployment and testing processes.
  • Potential for long-running deployments: The E2E tests rely on the successful deployment and synchronization of applications managed by Argo CD. If these deployments are inherently slow or encountering issues, they can easily exceed time limits.
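A hedged diagnostic sketch for confirming those states on the test cluster; the openshift-gitops namespace and the Tekton operator resource names are assumptions and may differ in this environment:

```bash
# Argo CD application sync/health state; OutOfSync / Missing entries stand out here
oc get applications.argoproj.io -n openshift-gitops \
  -o custom-columns=NAME:.metadata.name,SYNC:.status.sync.status,HEALTH:.status.health.status

# Tekton operator components; the analysis reports tektonaddons/tektonconfigs in Error
oc get tektonconfig,tektonaddon
oc get tektonconfig config \
  -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}'
```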

Impact

The timeout prevented the completion of the E2E tests, meaning the pipeline could not validate the functionality and stability of the deployed application infrastructure. This failure blocks the main branch integration and deployment verification.

🔍 Evidence

appstudio-e2e-tests/redhat-appstudio-e2e

Category: timeout
Root Cause: The E2E tests failed due to a timeout. This was likely caused by a prolonged execution of the application deployment and synchronization processes managed by Argo CD, which failed to complete within the allocated time limit.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/build-log.txt:2718
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:169","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2026-02-02T15:22:19Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/build-log.txt:2723
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:267","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","severity":"error","time":"2026-02-02T15:22:34Z"}

Analysis powered by prow-failure-analysis | Build: 2018313274809913344

@aThorp96 (Contributor, Author) commented Feb 2, 2026

/retest

@openshift-ci openshift-ci bot removed the lgtm label Feb 2, 2026
@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Timeout

Pipeline job failed due to the end-to-end tests exceeding the allocated timeout.

📋 Technical Details

Immediate Cause

The appstudio-e2e-tests/redhat-appstudio-e2e step failed because the test execution exceeded the configured 2-hour timeout. The process did not terminate within the expected timeframe and also failed to exit within the subsequent 15-second grace period.

Contributing Factors

While no specific test failures were reported, the additional_context reveals several ArgoCD ApplicationSets in an OutOfSync state and Tekton-related components (TektonAddon, TektonConfig) in an error state. These underlying cluster or deployment issues might have contributed to a slower-than-expected test execution or resource contention, indirectly leading to the timeout.

Impact

The timeout failure prevented the completion of the end-to-end test suite for this pull request, meaning the quality and correctness of the infrastructure changes could not be validated. This blocked the progression of the CI/CD pipeline for the affected changes.

🔍 Evidence

appstudio-e2e-tests/redhat-appstudio-e2e

Category: timeout
Root Cause: The end-to-end tests exceeded the allocated timeout of 2 hours, and the process failed to terminate gracefully, indicating a potential deadlock or an excessively long-running test case.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/build-log.txt:1098
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:169","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2026-02-02T22:57:46Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/build-log.txt:1103
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:267","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","severity":"error","time":"2026-02-02T22:58:01Z"}

Analysis powered by prow-failure-analysis | Build: 2018427882870673408

openshift-ci bot commented Feb 2, 2026

@aThorp96: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| ci/prow/appstudio-e2e-tests | a198c5c | link | true | /test appstudio-e2e-tests |

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@flacatus (Contributor) commented Feb 3, 2026

/lgtm
/approve

openshift-ci bot commented Feb 3, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: aThorp96, flacatus, infernus01

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the lgtm label Feb 3, 2026
@openshift-ci openshift-ci bot removed the lgtm label Feb 3, 2026
openshift-ci bot commented Feb 3, 2026

New changes are detected. LGTM label has been removed.

@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Infrastructure

Pipeline failed due to persistent DNS resolution errors preventing the cluster from being reached by diagnostic and test execution steps.

📋 Technical Details

Immediate Cause

Several diagnostic and data collection steps, including appstudio-e2e-tests/gather-audit-logs, appstudio-e2e-tests/gather-extra, appstudio-e2e-tests/gather-must-gather, and appstudio-e2e-tests/redhat-appstudio-gather, failed due to an inability to resolve the DNS name of the Kubernetes API server (api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com). This resulted in "no such host" errors when attempting network connections.
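As an illustration (not taken from the job itself), the lookup failure could be reproduced from a pod in the CI environment roughly as follows, assuming dig, nslookup, and curl are available in the image:

```bash
API_HOST=api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com

# Query the in-cluster resolver that appears in the errors (172.30.0.10), then the default path
dig +short "$API_HOST" @172.30.0.10 || true
nslookup "$API_HOST" || true

# Once the name resolves, a quick reachability probe of the API endpoint
curl -skI --max-time 10 "https://$API_HOST:6443/healthz" || echo "API endpoint unreachable"
```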

Contributing Factors

The primary test execution step, appstudio-e2e-tests/redhat-appstudio-e2e, was terminated because its entrypoint process did not exit gracefully within the allocated time. This termination is likely a downstream effect of the underlying network and DNS issues, which prevented the test environment from functioning correctly or communicating with the cluster. The must-gather tool also reported I/O timeouts in addition to DNS resolution failures, suggesting broader network instability.

Impact

The inability to resolve the Kubernetes API server's DNS name prevented the successful execution of critical data gathering steps, significantly hindering the ability to diagnose further issues. It also directly contributed to the termination of the main e2e test execution, causing the entire pipeline run to fail.

🔍 Evidence

appstudio-e2e-tests/gather-audit-logs

Category: infrastructure
Root Cause: The must-gather tool failed to resolve the DNS name of the Kubernetes API server, preventing it from collecting audit logs. This indicates a potential network or DNS configuration problem within the cluster environment.

Logs:

artifacts/appstudio-e2e-tests/gather-audit-logs/build-log.txt line 4
[must-gather      ] OUT 2026-02-03T17:21:52.70435508Z Get "https://api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/must-gather": dial tcp: lookup api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/build-log.txt line 17
error getting cluster version: Get "https://api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusterversions/version": dial tcp: lookup api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/build-log.txt line 21
error getting cluster operators: Get "https://api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp: lookup api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/build-log.txt line 26
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/build-log.txt line 74
error getting cluster version: Get "https://api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusterversions/version": dial tcp: lookup api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/build-log.txt line 78
error getting cluster operators: Get "https://api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp: lookup api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/build-log.txt line 82
error: creating temp namespace: Post "https://api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

appstudio-e2e-tests/gather-extra

Category: infrastructure
Root Cause: The gather-extra step failed because it could not resolve the DNS name of the Kubernetes API server (api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com). This indicates a network or DNS configuration problem in the environment where the job is running.

Logs:

artifacts/appstudio-e2e-tests/gather-extra/gather-extra.log line 4
E0203 17:21:44.831543      28 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com:6443/api?timeout=5s": dial tcp: lookup api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-extra/gather-extra.log line 13
Unable to connect to the server: dial tcp: lookup api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

appstudio-e2e-tests/gather-must-gather

Category: infrastructure
Root Cause: The failure is due to network connectivity issues between the must-gather pod and the OpenShift API server, manifesting as TCP timeouts and DNS resolution failures.

Logs:

artifacts/appstudio-e2e-tests~gather-must-gather/run.log line 13
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout
artifacts/appstudio-e2e-tests~gather-must-gather/run.log line 23
E0203 17:20:58.334551      54 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests~gather-must-gather/run.log line 39
error running backup collection: Get "https://api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout
artifacts/appstudio-e2e-tests~gather-must-gather/run.log line 40
error: creating temp namespace: Post "https://api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout

appstudio-e2e-tests/redhat-appstudio-e2e

Category: infrastructure
Root Cause: The make command was terminated due to the entrypoint process not exiting gracefully within the allowed grace period. This suggests an underlying issue with the execution environment or the long-running setup processes, potentially leading to resource exhaustion or unresponsiveness that triggered the termination.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step.log line 593
make: *** [Makefile:25: ci/test/e2e] Terminated
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step.log line 590
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:173","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2026-02-03T17:15:44Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step.log line 597
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:267","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","severity":"error","time":"2026-02-03T17:15:59Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step.log line 599
{"component":"entrypoint","error":"os: process already finished","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:269","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2026-02-03T17:15:59Z"}

appstudio-e2e-tests/redhat-appstudio-gather

Category: infrastructure
Root Cause: The step failed because it was unable to resolve the DNS name of the Kubernetes API server. This indicates a network or DNS configuration issue preventing the oc client from reaching the cluster.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-gather/run.log:44
E0203 17:21:59.750553      56 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com:6443/api?timeout=5s": dial tcp: lookup api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/redhat-appstudio-gather/run.log:1727
Unable to connect to the server: dial tcp: lookup api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/redhat-appstudio-gather/run.log:2899
error running backup collection: Get "https://api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/redhat-appstudio-gather/run.log:2900
error: creating temp namespace: Post "https://api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-cq4ws.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

Analysis powered by prow-failure-analysis | Build: 2018708124017364992

@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Infrastructure

Pipeline failed due to intermittent DNS resolution errors preventing access to the Kubernetes API server, which impacted multiple diagnostic and deployment steps.

📋 Technical Details

Immediate Cause

Multiple steps, including appstudio-e2e-tests/gather-audit-logs, appstudio-e2e-tests/gather-extra, appstudio-e2e-tests/gather-must-gather, and appstudio-e2e-tests/redhat-appstudio-gather, failed because they could not resolve the hostname of the Kubernetes API server (api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com). The error messages consistently show dial tcp: lookup ... on 172.30.0.10:53: no such host, indicating a DNS resolution failure.

Contributing Factors

The appstudio-e2e-tests/redhat-appstudio-e2e step encountered a 502 Bad Gateway when attempting to pull the Loki Helm chart. While this is a separate issue, the underlying cluster network instability suggested by the DNS failures might have also contributed to the unavailability of the Helm chart repository or related services. The must-gather artifact further corroborates the DNS resolution problem.
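A rough way to check whether the chart fetch failure is transient (the URL is the one reported in the evidence below; the retry loop itself is only an illustration):

```bash
CHART_URL=https://grafana.github.io/helm-charts/loki-6.49.0.tgz

# Retry the download a few times to distinguish a transient 502 from a persistent outage
for attempt in 1 2 3; do
  if curl -fsSLo /tmp/loki-6.49.0.tgz "$CHART_URL"; then
    echo "fetched on attempt $attempt"
    break
  fi
  echo "attempt $attempt failed; retrying in 10s..."
  sleep 10
done

# The same fetch through helm itself
helm pull "$CHART_URL" --destination /tmp
```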

Impact

The inability to resolve the Kubernetes API server's hostname prevented these diagnostic steps from collecting essential cluster information and configurations. This lack of diagnostic data could hinder debugging efforts for any other potential issues. Furthermore, it highlights a critical infrastructure instability within the testing environment that could affect the reliability of all subsequent tests and deployments.

🔍 Evidence

appstudio-e2e-tests/gather-audit-logs

Category: infrastructure
Root Cause: The must-gather tool failed to resolve the hostname of the Kubernetes API server, indicating a DNS resolution or network connectivity problem within the cluster environment.

Logs:

artifacts/appstudio-e2e-tests/gather-audit-logs/gather-audit-logs.log:15
[must-gather      ] OUT 2026-02-03T18:13:35.204066212Z Get "https://api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/must-gather": dial tcp: lookup api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/gather-audit-logs.log:45
error getting cluster version: Get "https://api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusterversions/version": dial tcp: lookup api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/gather-audit-logs.log:53
error getting cluster operators: Get "https://api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp: lookup api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/gather-audit-logs.log:64
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/gather-audit-logs.log:83
error running backup collection: Get "https://api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

appstudio-e2e-tests/gather-extra

Category: infrastructure
Root Cause: The gather-extra step failed because it could not resolve the DNS name of the Kubernetes API server. This suggests a problem with network connectivity, DNS configuration, or the cluster's API endpoint availability.

Logs:

artifacts/appstudio-e2e-tests/gather-extra/gather-extra.log line 2
E0203 18:13:27.762920      29 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com:6443/api?timeout=5s": dial tcp: lookup api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-extra/gather-extra.log line 7
Unable to connect to the server: dial tcp: lookup api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

appstudio-e2e-tests/gather-must-gather

Category: infrastructure
Root Cause: The failure is caused by a DNS resolution problem within the cluster, preventing the oc adm must-gather command from connecting to the OpenShift API server.

Logs:

artifacts/appstudio-e2e-tests/gather-must-gather/run.log line 14
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-must-gather/run.log line 20
E0203 18:13:14.292725      53 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-must-gather/run.log line 35
error running backup collection: Get "https://api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-must-gather/run.log line 36
error: creating temp namespace: Post "https://api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

appstudio-e2e-tests/redhat-appstudio-e2e

Category: infrastructure
Root Cause: The failure occurred because the helm pull command for the Loki chart (https://grafana.github.io/helm-charts/loki-6.49.0.tgz) returned a 502 Bad Gateway error. This indicates an issue with the availability or network connectivity to the Helm chart repository, which prevented the successful deployment of the vector-kubearchive-log-collector component. The overall step was terminated due to a prolonged failure and grace period timeout.

appstudio-e2e-tests/redhat-appstudio-gather

Category: infrastructure
Root Cause: The failure is caused by a DNS resolution error when trying to connect to the OpenShift API server. The hostname api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com could not be found, indicating a potential issue with network configuration or DNS availability for the cluster.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-gather/oc_command.log
E0203 18:13:44.269196      66 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com:6443/api?timeout=5s": dial tcp: lookup api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/redhat-appstudio-gather/oc_command.log
Unable to connect to the server: dial tcp: lookup api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/redhat-appstudio-gather/oc_command.log
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-zrc6s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

Analysis powered by prow-failure-analysis | Build: 2018735172823814144
